Decision-making is challenging in robotic environments with continuous object-centric states, continuous actions, long horizons, and sparse feedback. Hierarchical approaches, such as task and motion planning (TAMP), address these challenges by decomposing decision-making into two or more levels of abstraction. In a setting where demonstrations and symbolic predicates are given, prior work has shown how to learn symbolic operators and neural samplers with manually designed parameterized policies. Our main contribution is a method for learning parameterized policies jointly with the operators and samplers. These components are packaged into modular neuro-symbolic skills and sequenced together with search-then-sample TAMP to solve new tasks. In experiments across four robotic domains, we show that our approach, bilevel planning with neuro-symbolic skills, can solve a wide range of tasks with varying initial states, goals, and objects, outperforming six baselines and ablations. Video: https://youtu.be/pbfzp8rpugg Code: https://tinyurl.com/skill-learning
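To make the structure described in the abstract concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of how a modular neuro-symbolic skill might bundle a symbolic operator, a neural sampler, and a parameterized policy, and how a plan of such skills could be sequenced. All class names, fields, and the `step_env` callback are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Sequence

# Hypothetical types: a symbolic state is a set of ground predicates;
# low-level states, parameters, and actions are plain float vectors.
Predicate = str
SymbolicState = FrozenSet[Predicate]
Vector = Sequence[float]


@dataclass
class Skill:
    """A modular neuro-symbolic skill (illustrative structure only)."""
    preconditions: SymbolicState                 # symbolic operator: when the skill applies
    effects: SymbolicState                       # symbolic operator: what it achieves
    sampler: Callable[[Vector], Vector]          # neural sampler: proposes continuous parameters
    policy: Callable[[Vector, Vector], Vector]   # parameterized policy: (state, params) -> action

    def applicable(self, abstract_state: SymbolicState) -> bool:
        return self.preconditions <= abstract_state


def execute_plan(abstract_plan, state, step_env, horizon=100):
    """Sequence skills along a symbolic plan. Search-then-sample TAMP would
    choose `abstract_plan` and retry samplers on failure; that is omitted here."""
    for skill in abstract_plan:
        params = skill.sampler(state)            # sample continuous parameters once per skill
        for _ in range(horizon):
            action = skill.policy(state, params) # closed-loop control within the skill
            state, done = step_env(state, action)
            if done:                             # skill signals termination
                break
    return state
```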
This paper addresses aerial coverage of an enclosed region using space-filling curves, in particular Sierpinski curves. The designated region is triangulated, and a Sierpinski curve is used to explore each smaller triangular area. The region may contain one or more obstacles. An algorithm is proposed that prescribes an evasive maneuver (a detour) when an obstacle is detected. The algorithm is online; that is, it requires no prior knowledge of obstacle locations and can be applied while the robotic system is traversing the designated path. The fractal nature of the Sierpinski curve and simple geometric observations are used to formulate and validate the algorithm. Non-uniform coverage and the multiple-obstacle problem are also addressed.
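As a rough illustration of the curve construction the paper builds on, the following Python sketch generates waypoints along a Sierpinski-curve approximation over a right isosceles triangle by recursive bisection. The triangulation of an arbitrary region and the online obstacle-detour logic are not reproduced, and the function and parameter names are invented.

```python
import numpy as np


def sierpinski_waypoints(entry, apex, exit_, depth):
    """Recursively bisect a right isosceles triangle (right angle at `apex`) and
    list the sub-triangle centroids in Sierpinski-curve order, entry -> exit."""
    entry, apex, exit_ = map(np.asarray, (entry, apex, exit_))
    if depth == 0:
        return [(entry + apex + exit_) / 3.0]            # visit this cell once
    mid = (entry + exit_) / 2.0                          # split the hypotenuse
    return (sierpinski_waypoints(entry, mid, apex, depth - 1) +
            sierpinski_waypoints(apex, mid, exit_, depth - 1))


if __name__ == "__main__":
    # Cover a unit right triangle with a depth-6 curve approximation (64 waypoints).
    path = sierpinski_waypoints((0.0, 0.0), (0.0, 1.0), (1.0, 1.0), depth=6)
    print(len(path), "waypoints; first:", path[0], "last:", path[-1])
```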
We identify label errors in the test sets of 10 of the most commonly used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of at least 3.3% errors across the 10 datasets, where for example label errors comprise at least 6% of the ImageNet validation set. Putative label errors are identified using confident learning algorithms and then validated by human crowdsourcing (51% of the algorithmically flagged candidates are indeed mislabeled). Traditionally, machine learning practitioners choose which model to deploy based on test accuracy; our findings advise caution here, suggesting that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets. Surprisingly, we find that lower-capacity models may be practically more useful than higher-capacity models on real-world datasets with a high proportion of mislabeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by just 5%. Test set errors across the 10 datasets can be viewed at https://labelerrors.com, and all label errors can be reproduced at https://github.com/cleanlab/label-errors.
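For intuition about how confident learning can flag putative label errors, here is a minimal, self-contained Python sketch of the core thresholding idea: flag an example when its out-of-sample predicted probability for some other class clears that class's average self-confidence. It is a simplified stand-in, not the exact procedure or implementation used in the study.

```python
import numpy as np


def flag_label_issues(pred_probs, labels):
    """Minimal confident-learning-style filter.

    pred_probs: (n, k) out-of-sample predicted probabilities.
    labels:     (n,) given (possibly noisy) integer labels.
    Returns a boolean mask of examples flagged as likely mislabeled.
    Assumes every class appears at least once among the given labels.
    """
    n, k = pred_probs.shape
    # Per-class threshold: average predicted probability of class j over
    # examples whose given label is j (the class's "self-confidence").
    thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(k)])

    flagged = np.zeros(n, dtype=bool)
    for i in range(n):
        # Classes whose probability clears their threshold for this example.
        confident = [j for j in range(k) if pred_probs[i, j] >= thresholds[j]]
        if confident:
            best = max(confident, key=lambda j: pred_probs[i, j])
            flagged[i] = best != labels[i]   # confidently belongs to a different class
    return flagged


if __name__ == "__main__":
    probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.95, 0.05]])
    given = np.array([0, 1, 1])              # third example is likely mislabeled
    print(flag_label_issues(probs, given))   # expected: [False False  True]
```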
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers.
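One family of attacks against this kind of gradient masking applies the true, possibly non-differentiable preprocessing on the forward pass while substituting a differentiable approximation (here, the identity) on the backward pass. The Python/PyTorch sketch below illustrates that idea inside a projected-gradient loop; the `model`, `transform`, and hyperparameters are placeholders, and the code is an illustrative approximation rather than the paper's implementation.

```python
import torch


class BPDAIdentity(torch.autograd.Function):
    """Apply a (possibly non-differentiable) transform on the forward pass,
    but treat it as the identity on the backward pass."""

    @staticmethod
    def forward(ctx, x, transform):
        return transform(x.detach())

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None   # d(transform)/dx approximated by the identity


def pgd_with_bpda(model, transform, x, y, eps=0.03, alpha=0.01, steps=40):
    """Projected gradient descent through a gradient-masking preprocessor."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(BPDAIdentity.apply(x_adv, transform))
        loss = torch.nn.functional.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # ascend the loss
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                  # stay a valid image
    return x_adv.detach()
```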
Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world.
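The synthesis algorithm optimizes the attack against an expectation over a chosen distribution of transformations. As a hedged illustration of that idea, the Python/PyTorch sketch below averages the loss over randomly sampled, differentiable transformations and updates a bounded perturbation; the `model`, `sample_transform` callback, and hyperparameters are placeholders rather than the paper's implementation.

```python
import torch


def eot_attack(model, sample_transform, x, target, eps=0.1, lr=0.01,
               steps=200, samples_per_step=10):
    """Optimize a targeted perturbation that remains adversarial in expectation
    over a distribution of transformations drawn from `sample_transform`."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for _ in range(samples_per_step):
            # Each t must be differentiable w.r.t. its input
            # (e.g. affine warps, lighting changes, additive noise).
            t = sample_transform()
            logits = model(t(torch.clamp(x + delta, 0.0, 1.0)))
            loss = loss + torch.nn.functional.cross_entropy(logits, target)
        (loss / samples_per_step).backward()   # Monte Carlo estimate of E_t[loss]
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep the perturbation small
    return torch.clamp(x + delta.detach(), 0.0, 1.0)
```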